deps: remove langchain #1868

Closed
goneri wants to merge 1 commit into main from goneri/deps-remove-langchain_9285

Conversation

goneri (Contributor) commented Feb 12, 2026

Remove langchain and the associated pipeline to simplify our codebase
and reduce our exposure to problems coming from third-party dependencies.


Note

Medium Risk
Replaces the Ollama inference execution path (prompting, parsing, and timeouts) with a new custom HTTP client, which could change runtime behavior despite reducing dependency surface.

Overview
Removes the entire model_pipelines/langchain implementation (configuration, pipeline logic, and tests) and unregisters it from model_pipelines/__init__.py.

Refactors the ollama pipeline to no longer depend on LangChain: introduces a small requests-based OllamaClient, re-implements completions/playbook/role/explanation pipelines with local prompt formatting + response unwrapping helpers, and updates OllamaConfiguration to use the shared BaseConfig/PipelineConfiguration types.

Cleans up packaging by dropping langchain* (and related transitive entries) from pyproject.toml, requirements.txt, and uv.lock, and removes langchain/pipelines.py from pyright includes.
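To make the shape of the change concrete, here is a hypothetical sketch of the kind of small HTTP client the overview describes. The PR's actual `OllamaClient` is requests-based; this sketch uses only the standard library (`urllib`) to stay dependency-free, in the spirit of the change. All names, helpers, and defaults here (`unwrap_response`, the 30-second timeout) are assumptions for illustration, not the PR's code.

```python
# Illustrative stand-in for a minimal Ollama HTTP client; not the PR's
# implementation. Assumes Ollama's /api/generate endpoint.
import json
import urllib.request


def unwrap_response(payload: dict) -> str:
    """Extract the generated text from an /api/generate reply.

    Ollama returns the completion under the "response" key.
    """
    return payload.get("response", "").strip()


class OllamaClient:
    def __init__(self, base_url: str, model: str, timeout: float = 30.0):
        self.base_url = base_url.rstrip("/")
        self.model = model
        self.timeout = timeout

    def generate(self, prompt: str) -> str:
        # Non-streaming request so the reply is a single JSON object.
        body = json.dumps(
            {"model": self.model, "prompt": prompt, "stream": False}
        ).encode("utf-8")
        req = urllib.request.Request(
            f"{self.base_url}/api/generate",
            data=body,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req, timeout=self.timeout) as resp:
            return unwrap_response(json.loads(resp.read()))
```

The design point of such a client is that prompt formatting and response unwrapping become a handful of local helpers instead of a LangChain dependency chain.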

Written by Cursor Bugbot for commit 62919cf. This will update automatically on new commits.


cursor bot left a comment


Cursor Bugbot has reviewed your changes and found 1 potential issue.

    return OllamaClient(
        base_url=self.config.inference_url,
        model=model_id,
        timeout=self._timeout,


OllamaRoleExplanationPipeline references nonexistent self._timeout attribute

Medium Severity

OllamaRoleExplanationPipeline.get_chat_model references self._timeout, but this class inherits from NopRoleExplanationPipeline, NopMetaData, and MetaData, none of which initialize _timeout. Only OllamaMetaDataMixin sets that attribute. This will raise an AttributeError at runtime if get_chat_model is called. The old code didn't pass a timeout to OllamaLLM, so this is a regression introduced by the refactor.
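The inheritance gap the review describes can be reproduced with stand-in classes. The names below are shaped like the ones in the review but are hypothetical simplifications, not the PR's code; the "fix" variant shows one plausible remedy (mixing in the class that initializes the attribute), not necessarily what the project would choose.

```python
# Minimal reproduction of the reported bug, using stand-in classes.
class OllamaMetaDataMixin:
    """Only this mixin initializes _timeout."""

    def __init__(self):
        self._timeout = 30.0


class NopRoleExplanationPipeline:
    """Base class that never sets _timeout."""

    def __init__(self):
        pass


class OllamaRoleExplanationPipeline(NopRoleExplanationPipeline):
    def get_chat_model(self):
        # Bug: nothing on this class's MRO ever set self._timeout,
        # so this attribute access raises AttributeError at runtime.
        return {"timeout": self._timeout}


class FixedRoleExplanationPipeline(OllamaMetaDataMixin, NopRoleExplanationPipeline):
    """One possible fix: inherit the mixin that sets _timeout."""

    def get_chat_model(self):
        return {"timeout": self._timeout}
```

Because the attribute lookup only fails when `get_chat_model` is actually called, the bug would not surface at import time, which is what makes this kind of regression easy to miss in a refactor.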


@goneri goneri closed this Feb 12, 2026